Task-oriented dialogue systems often assist users with personal or confidential matters. For this reason, the developers of such a system are generally prohibited from observing actual usage. So how can they know where the system is failing and needs more training data or new functionality? In this work, we study ways in which realistic user utterances can be generated synthetically, to help increase the linguistic and functional coverage of the system without compromising the privacy of actual users. To this end, we propose a two-stage Differentially Private (DP) generation method that first generates latent semantic parses and then generates utterances conditioned on those parses. Our proposed approach improves MAUVE by 3.8$\times$ and parse-tree node-type overlap by 1.4$\times$ relative to current approaches for private synthetic data generation, improving on both fluency and semantic coverage. We further validate our approach on a realistic domain adaptation task, adding new functionality from private user data to a semantic parser, and show a 1.3$\times$ gain in its accuracy on the new feature.
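To make the two-stage recipe concrete, here is a minimal, runnable sketch (not the authors' released code) in which a Laplace-noised histogram over toy parse templates stands in for the DP-trained parse generator, and a hand-written template realizer stands in for the DP-trained utterance generator; the example parses, templates, and epsilon value are all illustrative assumptions.

```python
import random
from collections import Counter

# Toy private corpus of latent semantic parses (illustrative, not real user data).
private_parses = ["(SetAlarm time)", "(SetAlarm time)", "(SetAlarm time)",
                  "(PlayMusic artist)", "(SendMessage contact)"]

# Stage 1: privatize the parse distribution with the Laplace mechanism
# (scale 1/epsilon for sensitivity-1 histogram counts), then sample parses.
# The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon) noise.
epsilon = 1.0
noisy = {p: max(c + random.expovariate(epsilon) - random.expovariate(epsilon), 0.0)
         for p, c in Counter(private_parses).items()}
synthetic_parses = random.choices(list(noisy), weights=list(noisy.values()), k=10)

# Stage 2: realize each sampled parse as a fluent utterance. A template lookup
# stands in here for the DP-fine-tuned parse-conditioned utterance generator.
realize = {"(SetAlarm time)": "wake me up at 7 am tomorrow",
           "(PlayMusic artist)": "play something by Adele",
           "(SendMessage contact)": "text mom that I'm on my way"}
for parse in synthetic_parses:
    print(parse, "->", realize[parse])
```

Because sampling from the privatized stage-1 distribution is post-processing, the realized utterances inherit the DP guarantee of stage 1 in this sketch.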
User-generated social media data is in constant flux as new trends influence online discussion, causing test-data distribution shift for social media NLP applications. In addition, training data often changes over time as user data is deleted. Most current NLP systems are static and rely on fixed training data. As a result, they are unable to adapt to temporal change, both test distribution shift and deletion of training data, without frequent, costly retraining. In this paper, we study temporal adaptation through the task of longitudinal hashtag prediction and propose a non-parametric technique as a simple but effective solution: non-parametric classifiers use a datastore which can be updated, without retraining, to adapt to test distribution shift or training data deletion. We release a new benchmark dataset comprising 7.13M tweets from 2021, along with their hashtags, broken into consecutive temporal buckets. We compare parametric neural hashtag classification and hashtag generation models, which require retraining to adapt, against a non-parametric, training-free dense retrieval method that returns the hashtags of the nearest neighbors by text-embedding distance. In experiments on our longitudinal Twitter dataset, we find that dense nearest-neighbor retrieval yields a 64.12% relative performance gain over the best parametric baseline on test sets that exhibit distribution shift, without requiring gradient-based retraining. Furthermore, we show that our datastore approach is particularly well suited to dynamically deleted user data, with negligible computational cost and performance loss. Our novel benchmark dataset and empirical analysis can support future research into the important challenges of deploying AI systems on real-world user data.
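As a sketch of the updatable-datastore idea (with a toy hash-based stand-in for the paper's learned text encoder), nearest-neighbor prediction, trend adaptation, and user-data deletion all reduce to simple list operations, with no gradient-based retraining:

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic stand-in for a learned text encoder."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

store = []  # the updatable datastore: (embedding, hashtag, user_id) entries

def add(text: str, hashtag: str, user: str):
    # Adapt to new trends by appending entries as fresh data arrives.
    store.append((embed(text), hashtag, user))

def delete_user(user: str):
    # Honor a deletion request by dropping that user's entries; nothing is retrained.
    store[:] = [e for e in store if e[2] != user]

def predict(text: str, k: int = 3):
    # Return the hashtags of the k nearest neighbors by embedding similarity.
    sims = np.stack([e[0] for e in store]) @ embed(text)
    return [store[i][1] for i in np.argsort(-sims)[:k]]

add("what a final tonight", "#worldcup", "u1")
add("incredible goal!!", "#worldcup", "u2")
add("new phone just dropped", "#tech", "u3")
print(predict("who scored in the final", k=2))
delete_user("u2")  # entry removed from the datastore immediately
```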
Large language models have been shown to present privacy risks through memorization of their training data, and several recent works have studied such risks for the pre-training phase. Little attention, however, has been paid to the fine-tuning phase, and it is not well understood how different fine-tuning methods (such as fine-tuning the full model, only the model head, or an adapter) compare in terms of memorization risk. This is an increasing concern as the "pre-train and fine-tune" paradigm proliferates. In this paper, we empirically study the memorization of fine-tuning methods using membership inference and extraction attacks, and show that they differ markedly in their susceptibility to attack. We observe that fine-tuning only the head of the model is the most susceptible to attacks, whereas fine-tuning smaller adapters appears to be less vulnerable to known extraction attacks.
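As background on the attack side, here is a minimal sketch of a standard loss-thresholding membership inference attack against a fine-tuned language model; the HuggingFace-style causal LM, the candidate set, and the threshold are assumed placeholders rather than the paper's exact attack implementations.

```python
import torch

# Examples seen during fine-tuning tend to receive unusually low loss; the
# attacker exploits this by flagging candidates whose loss falls below a
# threshold calibrated on data known to be outside the fine-tuning set.

@torch.no_grad()
def lm_loss(model, input_ids: torch.Tensor) -> float:
    # Mean token negative log-likelihood under a HuggingFace-style causal LM.
    return model(input_ids=input_ids, labels=input_ids).loss.item()

def infer_membership(model, candidates, threshold: float):
    # candidates: iterable of (text, tokenized input_ids) pairs.
    # Returns (text, predicted_member), where member = suspiciously low loss.
    return [(text, lm_loss(model, ids) < threshold) for text, ids in candidates]
```

Comparing the attack's success rate across full, head-only, and adapter fine-tuning of the same base model is then a matter of swapping in each fine-tuned checkpoint.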
The wide adoption and application of Masked Language Models~(MLMs) on sensitive data (from legal to medical) necessitates a thorough quantitative investigation into their privacy vulnerabilities: to what extent do MLMs leak information about their training data? Prior attempts at measuring leakage of MLMs via membership inference attacks have been inconclusive, implying potential robustness of MLMs to privacy attacks. In this work, we posit that prior attempts were inconclusive because they based their attacks solely on the MLM's model score. We devise a stronger membership inference attack based on likelihood-ratio hypothesis testing, which uses an additional reference MLM to more accurately quantify the privacy risks of memorization in MLMs. We show that masked language models are extremely susceptible to likelihood-ratio membership inference attacks: our empirical results on models trained on medical notes show that our attack improves the AUC of prior membership inference attacks from 0.66 to an alarmingly high 0.90, with a significant improvement in the low-error region; at a 1% false positive rate, our attack is 51$\times$ more powerful than prior work.
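The statistic at the core of such a likelihood-ratio attack can be sketched as below, using the standard pseudo-log-likelihood approximation for scoring an MLM; the HuggingFace-style target/reference models and tokenizer are assumptions, not the authors' released code.

```python
import torch

@torch.no_grad()
def pseudo_log_likelihood(model, tokenizer, text: str) -> float:
    # Mask one position at a time and sum the log-probability the MLM assigns
    # to the original token (the usual PLL approximation for MLM scoring).
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    total = 0.0
    for i in range(1, len(ids) - 1):          # skip [CLS] / [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, -1)[ids[i]].item()
    return total

def lr_statistic(target, reference, tokenizer, text: str) -> float:
    # Membership evidence: how much more likely the sample is under the target
    # MLM than under a general-domain reference MLM. The attack thresholds
    # this statistic to decide membership.
    return (pseudo_log_likelihood(target, tokenizer, text)
            - pseudo_log_likelihood(reference, tokenizer, text))
```

Calibrating against a reference model is what separates "this text is memorized by the target" from "this text is simply high-probability for any model of the domain".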
Natural language processing (NLP) techniques can use a person's utterances to help diagnose medical conditions such as depression. Depression is a serious medical illness that can adversely affect how people feel, think, and act, which can lead to emotional and physical problems. Because of the sensitivity of such data, privacy measures need to be taken for handling this data and training models on it. In this work, we study the effects of Differential Privacy (DP) on training contextualized language models (BERT, ALBERT, RoBERTa, and DistilBERT) in both centralized and Federated Learning (FL) settings. We offer insights on how to privately train NLP models and which architectures and setups provide more desirable privacy-utility trade-offs. We envision this work being used in future healthcare and mental health studies to keep medical histories private. We therefore provide an open-source implementation of this work.
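As an illustration of the centralized setting, here is a minimal sketch of DP-SGD training with Opacus on a stand-in classifier head; the model, synthetic data, and privacy parameters are illustrative, and the federated variant would run analogous private updates per client.

```python
import torch
from opacus import PrivacyEngine

model = torch.nn.Linear(768, 2)          # stand-in for a BERT-style encoder + head
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(256, 768),
                                   torch.randint(0, 2, (256,))),
    batch_size=32)

# Opacus swaps in per-example gradient clipping plus Gaussian noise (DP-SGD)
# and tracks the privacy budget spent through its accountant.
model, optimizer, loader = PrivacyEngine().make_private(
    module=model, optimizer=optimizer, data_loader=loader,
    noise_multiplier=1.0,                # noise scale relative to the clip norm
    max_grad_norm=1.0)                   # per-example gradient clipping bound

for x, y in loader:
    optimizer.zero_grad()
    torch.nn.functional.cross_entropy(model(x), y).backward()
    optimizer.step()
```

The noise multiplier and clipping bound are the main knobs in the privacy-utility trade-off the abstract refers to: more noise or tighter clipping strengthens the guarantee but degrades model quality.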